Machine learning models exhibit two seemingly contradictory phenomena: training data memorization and various forms of forgetting. In memorization, models overfit specific training examples and become susceptible to privacy attacks. In forgetting, examples that appeared early in training are eventually forgotten. In this work, we connect these phenomena. We propose a technique to measure the extent to which models "forget" the specifics of training examples, becoming less susceptible to privacy attacks on examples they have not seen recently. We show that, while non-convexity can prevent forgetting from happening in the worst case, standard image and speech models empirically do forget examples over time. We identify nondeterminism as a potential explanation, showing that deterministically trained models do not forget. Our results suggest that examples seen early when training with extremely large datasets, such as those used to pretrain a model, may enjoy privacy benefits at the expense of examples seen later.
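The measurement idea lends itself to a small experiment: plant one "canary" example early and another late in a single training run, then compare them with a per-example loss score, a simple loss-based membership signal. Below is a minimal sketch of that setup; the synthetic task, the canary placement, and the loss comparison are all assumptions for illustration, not the paper's exact attack.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup (assumed): a small classifier trained on a synthetic task,
# with one canary example injected early in training and one late.
d, steps, bs = 32, 2000, 64
model = nn.Sequential(nn.Linear(d, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

canary_early = (torch.randn(1, d), torch.tensor([1]))
canary_late = (torch.randn(1, d), torch.tensor([1]))

for step in range(steps):
    x = torch.randn(bs, d)
    y = (x[:, 0] > 0).long()                      # base task labels
    if step == 100:                               # canary seen early
        x = torch.cat([x, canary_early[0]])
        y = torch.cat([y, canary_early[1]])
    if step == steps - 100:                       # canary seen late
        x = torch.cat([x, canary_late[0]])
        y = torch.cat([y, canary_late[1]])
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# Loss-based membership score: a markedly lower loss on the late canary
# than the early one is consistent with the early example being forgotten.
with torch.no_grad():
    for name, (cx, cy) in [("early", canary_early), ("late", canary_late)]:
        print(name, "canary loss:", loss_fn(model(cx), cy).item())
```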
We present the Berkeley Crossword Solver, a state-of-the-art approach for automatically solving crossword puzzles. Our system generates answer candidates for each crossword clue using neural question answering models, then combines loopy belief propagation with local search to find full puzzle solutions. Compared to existing approaches, our system improves exact puzzle accuracy to 82% on crosswords from The New York Times and obtains 99.9% letter accuracy on themeless puzzles. Additionally, in 2021, a hybrid of our system and the existing Dr.Fill system outperformed all human competitors for the first time at the American Crossword Puzzle Tournament. To facilitate research on question answering and crossword solving, we analyze our system's remaining errors and release a dataset of over six million question-answer pairs.
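To make the inference step concrete, here is a minimal sketch of loopy belief propagation on a toy crossword fragment. The slots, candidate answers, and prior scores are invented stand-ins for the neural QA model's outputs, and the local-search refinement stage is omitted.

```python
import numpy as np

# Toy crossword fragment (assumed): across slots A1, A2 and a down slot
# D1 crossing both. Candidate fills and prior scores stand in for the
# neural QA model's output distribution.
cands = {
    "A1": [("cat", 0.6), ("car", 0.4)],
    "A2": [("tea", 0.5), ("toe", 0.5)],
    "D1": [("tot", 0.5), ("tat", 0.5)],
}
# Each crossing (u, pos_u, v, pos_v) requires u[pos_u] == v[pos_v].
crossings = [("A1", 2, "D1", 0), ("A2", 0, "D1", 2)]

nbrs = {}
for u, pu, v, pv in crossings:
    nbrs.setdefault(u, []).append((v, pu, pv))
    nbrs.setdefault(v, []).append((u, pv, pu))

words = {s: [w for w, _ in cs] for s, cs in cands.items()}
prior = {s: np.array([p for _, p in cs]) for s, cs in cands.items()}
msg = {(u, v): np.ones(len(words[v])) for u in nbrs for v, _, _ in nbrs[u]}

for _ in range(10):                    # loopy BP sweeps
    new = {}
    for u in nbrs:
        for v, pu, pv in nbrs[u]:
            m = np.zeros(len(words[v]))
            for j, wv in enumerate(words[v]):
                for i, wu in enumerate(words[u]):
                    if wu[pu] != wv[pv]:
                        continue       # letters disagree at the crossing
                    other = [msg[(n, u)][i] for n, _, _ in nbrs[u] if n != v]
                    m[j] += prior[u][i] * np.prod(other)
            new[(u, v)] = m / m.sum() if m.sum() else m
    msg = new

for s in words:                        # per-slot beliefs after BP
    belief = prior[s] * np.prod([msg[(n, s)] for n, _, _ in nbrs[s]], axis=0)
    print(s, "->", words[s][int(np.argmax(belief))])
```

In this fragment the crossing with D1 rules out "car" for A1, illustrating how grid constraints sharpen the QA model's per-clue beliefs.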
Past work has shown that large language models are susceptible to privacy attacks, where adversaries generate sequences from a trained model and detect which sequences are memorized from the training set. In this work, we show that the success of these attacks is largely due to duplication in commonly used web-scraped training sets. We first show that the rate at which language models regenerate training sequences is superlinearly related to a sequence's count in the training set. For instance, a sequence that is present 10 times in the training data is on average generated ~1000 times more often than a sequence that is present only once. We next show that existing methods for detecting memorized sequences have near-chance accuracy on non-duplicated training sequences. Finally, we find that after applying methods to deduplicate training data, language models are considerably more secure against these types of privacy attacks. Taken together, our results motivate an increased focus on deduplication in privacy-sensitive applications and a reevaluation of the practicality of existing privacy attacks.
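As a concrete illustration of the mitigation, the sketch below performs a simple n-gram-overlap deduplication pass over a toy corpus. The corpus, the n-gram length, and the drop-on-any-overlap rule are assumptions for illustration; production deduplication (e.g., exact-substring or suffix-array methods) is more involved.

```python
from collections import Counter

# Toy corpus (assumed) with one sequence duplicated several times.
corpus = [
    "the quick brown fox jumps over the lazy dog",
    "the quick brown fox jumps over the lazy dog",
    "colorless green ideas sleep furiously",
    "the quick brown fox jumps over the lazy dog",
    "a genuinely unique training sentence",
]
print("duplicate counts:", {s: c for s, c in Counter(corpus).items() if c > 1})

def ngrams(s, n=5):
    toks = s.split()
    return {tuple(toks[i:i + n]) for i in range(max(len(toks) - n + 1, 1))}

# Drop any sequence sharing a long n-gram with already-kept data.
seen, kept = set(), []
for seq in corpus:
    grams = ngrams(seq)
    if grams & seen:
        continue                 # near-duplicate of kept data
    kept.append(seq)
    seen |= grams

print("before:", len(corpus), "after:", len(kept))
```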
To create models that are robust across a wide range of test inputs, training datasets should include diverse examples that span numerous phenomena. Dynamic adversarial data collection (DADC), in which annotators craft examples that challenge continually improving models, holds promise as an approach for generating such diverse training sets. Prior work has shown that running DADC over 1-3 rounds can help models fix some error types, but it does not necessarily lead to better generalization beyond adversarial test data. We argue that running DADC over many rounds maximizes its training-time benefits, as the different rounds can together cover many task-relevant phenomena. We present the first study of longer-term DADC, in which we collect 20 rounds of NLI examples for a small set of premises, using both adversarial and non-adversarial approaches. Models trained on DADC examples make 26% fewer errors on our expert-curated test set than models trained on non-adversarial data. Our analysis shows that DADC yields examples that are more difficult, more lexically and syntactically diverse, and contain fewer annotation artifacts than non-adversarial examples.
It has become common to publish large (billion parameter) language models that have been trained on private datasets. This paper demonstrates that in such settings, an adversary can perform a training data extraction attack to recover individual training examples by querying the language model. We demonstrate our attack on GPT-2, a language model trained on scrapes of the public Internet, and are able to extract hundreds of verbatim text sequences from the model's training data. These extracted examples include (public) personally identifiable information (names, phone numbers, and email addresses), IRC conversations, code, and 128-bit UUIDs. Our attack is possible even though each of the above sequences is included in just one document in the training data. We comprehensively evaluate our extraction attack to understand the factors that contribute to its success. Worryingly, we find that larger models are more vulnerable than smaller models. We conclude by drawing lessons and discussing possible safeguards for training large language models.
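A minimal sketch of the generate-then-rank recipe is below, using the public gpt2 checkpoint from Hugging Face transformers: sample unconditioned sequences, then surface the lowest-perplexity generations as candidate memorized text. The sampling parameters are assumptions, and the paper's stronger membership signals (comparison against a reference model, zlib entropy, lowercasing) are omitted for brevity.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Sample unconditioned sequences, seeding with the end-of-text token.
prompt = tok(tok.eos_token, return_tensors="pt").input_ids
with torch.no_grad():
    samples = model.generate(
        prompt, do_sample=True, top_k=40, max_length=64,
        num_return_sequences=20, pad_token_id=tok.eos_token_id,
    )
    # Score each generation by its own perplexity under the model;
    # unusually low perplexity is the memorization signal.
    scored = []
    for seq in samples:
        out = model(seq.unsqueeze(0), labels=seq.unsqueeze(0))
        scored.append((out.loss.exp().item(), tok.decode(seq)))

scored.sort()                          # most suspicious first
for ppl, text in scored[:3]:
    print(f"{ppl:8.1f}  {text[:70]!r}")
```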
The remarkable success of pretrained language models has motivated the study of what kinds of knowledge these models learn during pretraining. Reformulating tasks as fill-in-the-blank problems (e.g., cloze tests) is a natural approach for gauging such knowledge; however, its usage is limited by the manual effort and guesswork required to write suitable prompts. To address this, we develop AUTOPROMPT, an automated method to create prompts for a diverse set of tasks, based on a gradient-guided search. Using AUTOPROMPT, we show that masked language models (MLMs) have an inherent capability to perform sentiment analysis and natural language inference without additional parameters or finetuning, sometimes achieving performance on par with recent state-of-the-art supervised models. We also show that our prompts elicit more accurate factual knowledge from MLMs than the manually created prompts on the LAMA benchmark, and that MLMs can be used as relation extractors more effectively than supervised relation extraction models. These results demonstrate that automatically generated prompts are a viable parameter-free alternative to existing probing methods, and as pretrained LMs become more sophisticated and capable, potentially a replacement for finetuning.
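The core of a gradient-guided search of this kind is a HotFlip-style first-order score: the gradient of the loss with respect to each trigger token's embedding, dotted against every row of the embedding matrix, ranks candidate replacements. The sketch below illustrates only that scoring on a toy frozen model; the embedding table, linear head, and one-flip-per-step schedule are invented stand-ins, not the paper's MLM setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy stand-in (assumed) for a frozen model: an embedding table plus a
# linear head; the real method scores replacements against a masked LM.
V, D = 100, 16
emb = nn.Embedding(V, D)
clf = nn.Linear(D, 2)
loss_fn = nn.CrossEntropyLoss()

trigger = torch.randint(V, (3,))       # current trigger token ids
label = torch.tensor([1])              # target class for the prompt

for step in range(12):
    e = emb(trigger).clone().detach().requires_grad_(True)
    loss = loss_fn(clf(e.mean(0, keepdim=True)), label)
    loss.backward()
    # First-order estimate: swapping position p's token for token w
    # changes the loss by roughly (emb(w) - e_p) . grad_p, so we pick
    # the w maximizing -emb(w) . grad_p at one position per step.
    with torch.no_grad():
        scores = -emb.weight @ e.grad.T    # (V, num_trigger_positions)
    p = step % len(trigger)
    trigger[p] = scores[:, p].argmax()

print("selected trigger ids:", trigger.tolist())
```

In the full method, the top-scoring candidates are re-evaluated exactly against the frozen MLM before a swap is accepted; the sketch greedily takes the best first-order candidate.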
The best evidence concerning comparative treatment effectiveness comes from clinical trials, the results of which are reported in unstructured articles. Medical experts must manually extract information from these articles to inform decision-making, which is time-consuming and expensive. Here we consider the end-to-end task of (a) extracting treatments and outcomes from full-text articles describing clinical trials (entity identification) and (b) inferring the reported results for the former with respect to the latter (relation extraction). We introduce new data for this task and evaluate models that have recently achieved state-of-the-art results on similar tasks in natural language processing. We then propose a new method, motivated by how trial results are typically presented, that outperforms these purely data-driven baselines. Finally, we run a fielded evaluation of the model with a non-profit seeking to identify existing drugs that might be re-purposed for cancer, showing the potential utility of end-to-end evidence extraction systems.
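Schematically, the end-to-end task decomposes into a span tagger followed by a pairwise relation classifier. The sketch below wires that pipeline together with deliberately trivial stand-ins, keyword matchers in place of the learned models, purely to show the interface between the two stages; every pattern, label, and example sentence here is an invented placeholder.

```python
import re

# Placeholder "taggers" (assumed): the real system uses learned models.
TREATMENTS = re.compile(r"\b(aspirin|placebo|metformin)\b", re.I)
OUTCOMES = re.compile(r"\b(mortality|blood pressure)\b", re.I)

def extract_entities(text):
    """Stage (a): identify treatment and outcome spans."""
    return ([m.group() for m in TREATMENTS.finditer(text)],
            [m.group() for m in OUTCOMES.finditer(text)])

def classify_relation(text, treatment, outcome):
    """Stage (b): placeholder relation classifier over a candidate pair."""
    return "sig. decrease" if "reduced" in text.lower() else "no difference"

abstract = "Aspirin significantly reduced mortality compared to placebo."
treatments, outcomes = extract_entities(abstract)
for t in treatments:
    for o in outcomes:
        print((t, o, classify_relation(abstract, t, o)))
```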
Non-linear state-space models, also known as general hidden Markov models, are ubiquitous in statistical machine learning, being the most classical generative models for serial data and sequences in general. The particle-based, rapid incremental smoother PaRIS is a sequential Monte Carlo (SMC) technique allowing for efficient online approximation of expectations of additive functionals under the smoothing distribution in these models. Such expectations appear naturally in several learning contexts, such as maximum likelihood estimation (MLE) and Markov score climbing (MSC). PaRIS has linear computational complexity, limited memory requirements and comes with non-asymptotic bounds, convergence results and stability guarantees. Still, being based on self-normalised importance sampling, the PaRIS estimator is biased. Our first contribution is to design a novel additive smoothing algorithm, the Parisian particle Gibbs (PPG) sampler, which can be viewed as a PaRIS algorithm driven by conditional SMC moves, resulting in bias-reduced estimates of the targeted quantities. We substantiate the PPG algorithm with theoretical results, including new bounds on bias and variance as well as deviation inequalities. Our second contribution is to apply PPG in a learning framework, covering MLE and MSC as special examples. In this context, we establish, under standard assumptions, non-asymptotic bounds highlighting the value of bias reduction and the implicit Rao--Blackwellization of PPG. These are the first non-asymptotic results of this kind in this setting. We illustrate our theoretical results with numerical experiments supporting our claims.
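To ground the PaRIS building block on which PPG is based, here is a minimal bootstrap particle filter on a toy linear-Gaussian model that carries PaRIS statistics: each new particle averages, over a few sampled backward indices, the inherited statistic plus the new increment of the additive functional. For simplicity the backward indices are drawn by direct categorical sampling (PaRIS proper uses accept-reject draws to reach linear complexity), and the conditional-SMC wrapper that makes PPG bias-reduced is omitted; the model and functional are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-Gaussian state-space model (assumed):
#   X_t = a*X_{t-1} + sigma_x*eps_t,   Y_t = X_t + sigma_y*nu_t
a, sigma_x, sigma_y = 0.9, 1.0, 1.0
T, N, N_tilde = 100, 500, 2       # time steps, particles, backward draws

x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = a * x_true[t - 1] + sigma_x * rng.standard_normal()
y = x_true + sigma_y * rng.standard_normal(T)

h = lambda x_prev, x_cur: x_prev  # additive functional: sum of states

def log_q(x_prev, x_cur):         # log transition density (up to const.)
    return -0.5 * ((x_cur - a * x_prev) / sigma_x) ** 2

X = rng.standard_normal(N)        # initial particles
logw = -0.5 * ((y[0] - X) / sigma_y) ** 2
tau = np.zeros(N)                 # PaRIS statistics, tau_0 = 0
for t in range(1, T):
    w = np.exp(logw - logw.max()); w /= w.sum()
    anc = rng.choice(N, size=N, p=w)          # multinomial resampling
    X_new = a * X[anc] + sigma_x * rng.standard_normal(N)
    tau_new = np.empty(N)
    for i in range(N):                        # PaRIS backward update
        lb = np.log(w + 1e-300) + log_q(X, X_new[i])
        bw = np.exp(lb - lb.max()); bw /= bw.sum()
        J = rng.choice(N, size=N_tilde, p=bw)
        tau_new[i] = np.mean(tau[J] + h(X[J], X_new[i]))
    X, tau = X_new, tau_new
    logw = -0.5 * ((y[t] - X) / sigma_y) ** 2

w = np.exp(logw - logw.max()); w /= w.sum()
print("online estimate of the smoothed sum of states:", np.sum(w * tau))
```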
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely monitor the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
This paper introduces a novel algorithm, the Perturbed Proximal Preconditioned SPIDER algorithm (3P-SPIDER), designed to solve finite-sum non-convex composite optimization. It is a stochastic Variable Metric Forward-Backward algorithm, which allows an approximate preconditioned forward operator and uses a variable metric proximity operator as the backward operator; it also proposes a mini-batch strategy with variance reduction to address the finite-sum setting. We show that 3P-SPIDER extends some stochastic preconditioned Gradient Descent-based algorithms and some Incremental Expectation Maximization algorithms to composite optimization and to the case where the forward operator cannot be computed in closed form. We also provide an explicit control of convergence in expectation of 3P-SPIDER, and study its complexity in order to satisfy the epsilon-approximate stationary condition. Our results are the first to combine the composite non-convex optimization setting, a variance-reduction technique that tackles the finite-sum setting through a minibatch strategy, and deterministic or random approximations of the preconditioned forward operator. Finally, through an application to inference in a logistic regression model with random effects, we numerically compare 3P-SPIDER to other stochastic forward-backward algorithms and discuss the role of some design parameters of 3P-SPIDER.
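For intuition about the variance-reduction ingredient, here is a minimal prox-SPIDER sketch on an invented finite-sum non-convex problem with an l1 backward step: a periodic full-gradient refresh plus minibatch correction terms drives the forward operator, followed by a soft-thresholding proximal step. 3P-SPIDER's distinctive pieces, the variable-metric (preconditioned) proximity operator and the approximate forward operator, are deliberately left out; the problem and step sizes are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy composite problem: min_x (1/n) sum_i f_i(x) + lam*||x||_1,
# with f_i(x) = 0.5*(sigmoid(a_i.x) - b_i)^2 smooth but non-convex.
n, d, lam = 200, 20, 0.01
A = rng.standard_normal((n, d))
b = rng.uniform(size=n)

def grad_f(x, idx):
    # minibatch-averaged gradient of the f_i over indices idx
    s = 1.0 / (1.0 + np.exp(-(A[idx] @ x)))
    return A[idx].T @ ((s - b[idx]) * s * (1.0 - s)) / len(idx)

def prox_l1(x, step):
    # backward step: proximity operator of step*lam*||.||_1
    return np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

x = np.zeros(d)
x_prev = x.copy()
gamma, epoch, batch = 0.2, 20, 10
for k in range(400):
    if k % epoch == 0:
        v = grad_f(x, np.arange(n))                   # full refresh
    else:
        idx = rng.choice(n, size=batch, replace=False)
        v = v + grad_f(x, idx) - grad_f(x_prev, idx)  # SPIDER recursion
    x_prev = x.copy()
    x = prox_l1(x - gamma * v, gamma)                 # forward-backward

s = 1.0 / (1.0 + np.exp(-(A @ x)))
print("objective:", 0.5 * np.mean((s - b) ** 2) + lam * np.abs(x).sum())
```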